Search for: All records

Creators/Authors contains: "Zhao, Hanwen"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Digital knitting machines provide a fast and efficient way to create garments, but commercial knitting tools are limited to predefined templates. While many knitting design tools help users create patterns from scratch, modifying existing patterns remains challenging. This paper introduces KnitA11y, a digital machine knitting pipeline that enables users to import hand-knitting patterns, add accessibility features, and fabricate them using machine knitting. We support modifications such as holes, pockets, and straps/handles, based on common accessible functional modifications identified in a survey of Ravelry.com. KnitA11y offers an interactive design interface that allows users to visualize patterns and customize the position and shape of modifications. We demonstrate KnitA11y’s capabilities through diverse examples, including a sensory-friendly scarf with a pocket, a hat with a hole for assistive devices, a sock with a pull handle, and a mitten with a pocket for heating pads to alleviate Raynaud’s symptoms. 
    Free, publicly-accessible full text available April 25, 2026
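
    As a purely illustrative sketch of the grid-editing idea the abstract describes, the Python below models a pattern as a 2D stitch grid and marks a user-positioned rectangular hole. The `Pattern` class, stitch symbols, and `add_hole` helper are hypothetical and are not KnitA11y's actual pipeline or API.

    ```python
    # Hypothetical sketch (not KnitA11y's API): a knit pattern as a 2D stitch
    # grid, with a hole modification whose position and shape the user chooses.
    from dataclasses import dataclass, field

    KNIT, HOLE = "k", "."  # "k" = plain knit stitch, "." = stitch removed for the hole

    @dataclass
    class Pattern:
        rows: int
        cols: int
        grid: list = field(init=False)

        def __post_init__(self):
            self.grid = [[KNIT] * self.cols for _ in range(self.rows)]

        def add_hole(self, top: int, left: int, height: int, width: int) -> None:
            """Mark a rectangular hole region, clipped to the pattern bounds."""
            for r in range(top, min(top + height, self.rows)):
                for c in range(left, min(left + width, self.cols)):
                    self.grid[r][c] = HOLE

        def render(self) -> str:
            return "\n".join("".join(row) for row in self.grid)

    # Example: a 6x12 swatch with a 2x4 hole, e.g. an opening for an assistive device.
    pattern = Pattern(rows=6, cols=12)
    pattern.add_hole(top=2, left=4, height=2, width=4)
    print(pattern.render())
    ```

    A real pipeline would also have to reconcile such an edit with knittability and machine constraints; this sketch only captures the interactive placement idea.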
  2. Lithic Use-Wear Analysis (LUWA) using microscopic images is an underexplored vision-for-science research area. It seeks to distinguish the worked material, which is critical for understanding archaeological artifacts, material interactions, tool functionalities, and dental records. However, this challenging task goes beyond the well-studied image classification problem for common objects: the complex wear mechanism and microscopic imaging introduce many confounders, making it difficult even for human experts to identify the worked material successfully. In this paper, we investigate the following three questions on this unique vision task for the first time: (i) How well can state-of-the-art pre-trained models (like DINOv2) generalize to this rarely seen domain? (ii) How can few-shot learning be exploited for scarce microscopic images? (iii) How do the ambiguous magnification and sensing modality influence classification accuracy? To study these questions, we collaborated with archaeologists and built the first open-source and largest LUWA dataset, containing 23,130 microscopic images with different magnifications and sensing modalities. Extensive experiments show that existing pre-trained models notably outperform human experts but still leave substantial room for improvement. Most importantly, the LUWA dataset provides an underexplored opportunity for the vision and learning communities and complements existing image classification problems on common objects.
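
    The few-shot question in (ii) can be made concrete with a minimal sketch: extract frozen DINOv2 embeddings and assign a query image to the nearest class centroid. The hub entry point `dinov2_vits14` is the published DINOv2 loader; the image paths, material labels, and data layout below are placeholders, and this is not the LUWA benchmark code.

    ```python
    # Illustrative few-shot baseline (not the LUWA benchmark code):
    # frozen DINOv2 features + nearest-centroid classification.
    import torch
    from PIL import Image
    from torchvision import transforms

    # Load a frozen DINOv2 ViT-S/14 backbone from the official hub entry point.
    model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def embed(paths):
        """L2-normalized CLS embeddings ([N, 384] for ViT-S/14) for image paths."""
        batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
        return torch.nn.functional.normalize(model(batch), dim=-1)

    # Few-shot support set: a handful of labeled images per worked material
    # (placeholder paths and labels).
    support = {
        "bone": ["bone_01.png", "bone_02.png"],
        "wood": ["wood_01.png", "wood_02.png"],
    }
    centroids = {m: embed(paths).mean(dim=0) for m, paths in support.items()}

    def classify(query_path):
        """Pick the material whose centroid is most cosine-similar to the query."""
        q = embed([query_path])[0]
        return max(centroids, key=lambda m: float(q @ centroids[m]))

    print(classify("unknown_tool.png"))  # e.g. "bone"
    ```

    Nearest-centroid over frozen features is a standard few-shot baseline (a prototypical classifier); the paper's actual few-shot protocols may differ.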